Reasoning about Intended Actions
Abstract
In most research on reasoning about actions and reasoning about narratives, one either reasons about hypothetical execution of actions or about actions that actually occurred. In this paper we first develop a high-level language that allows the expression of intended or planned action sequences. Unlike observed action occurrences, planned or intended action occurrences may not actually take place. But often, when they do not take place, they persist and happen at an opportune future time. We give the syntax and semantics for expressing such intentions. We then give a logic programming axiomatization and show the correspondence between the semantics of a description in the high-level language and the answer sets of the corresponding logic programming axiomatization. We illustrate the application of our formalism with respect to reasoning about trips.

Introduction and Motivation

In reasoning about actions (for example, (?; ?)) and reasoning about narratives, we often reason about action sequences that are executed in a particular situation, or actions that happened at particular time points. Alternatively, there has been some work on reasoning about natural actions (?) and actions that are triggered. In this paper we consider intended execution of actions and formalize how to reason about such intentions. To motivate this further, consider a narrative where an agent intended to execute action a at time point i. A commonsense reasoner looking back at this intention would conclude that the agent must have executed action a at time point i. To ground this example, suppose that the wife of our reasoner says that she intends to leave work at 5 PM. At 6 PM the commonsense reasoner would conclude that his wife must have left at 5 PM. Now suppose the reasoner checks his email and finds a message from his wife saying that she has been held up in a meeting, and later learns that the meeting ended at 5:30.
The reasoner would then conclude that his wife must have left at 5:30 PM. That is, her intended action, having become impossible at the initially intended time point, must have persisted and been executed at the next time point at which it became executable.

Copyright © 2005, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.

Now let us generalize this to a sequence of actions, where an agent intends to execute a sequence a1, . . . , an at time point i. What if it happens (the world evolves in such a way) that the executability condition of ak is not true at the time point where ak−1 ended? Does this mean the agent abandoned his intention to execute ak, . . . , an? It seems to us that most agents, if they failed to execute their intended action ak after the execution of ak−1, would execute ak at the next possible time point when it became executable. As before, let us consider a more grounded example. John is supposed to have taken a flight from A to B and then a connection from B to C. Suppose Peter finds out that John's flight from A to B was late. Once Peter knows exactly when John reached B, he would reason that John would have taken the next flight from B to C. In other words, failure to go from B to C at a particular time point does not mean that John abandoned his intention to go from B to C; most likely he would simply have done it at the next possible time point. This actually happened to one of the authors: he correctly guessed that his wife would take the next flight (after missing a connection) and was able to meet her at the airport when it arrived. In most earlier work on reasoning about actions and narratives (for example, (?)), if one or more of the actions in a given sequence a1, . . . , an are not executable or are otherwise prevented from execution, then the reasoning process rigidly assumes either that the actions were not executed or that the domain is inconsistent.
The formulation there is appropriate with respect to the assumptions in those languages. Here we consider the new notion of "intended (or planned) execution of actions", which needs a different formalization. In this we can take pointers from prior studies of intentions (?; ?). In particular, intentions have been studied from the point of view of the design of rational agents (?), and they are one of the three main components of BDI (Belief-Desire-Intention) agents. In (?), various properties of the 'intentions' of a rational agent are discussed. In particular, the author says:

Summarizing, we can see that intentions play the following important roles in practical reasoning:
• Intentions drive means-ends reasoning. If I have formed an intention, then I will attempt to achieve the intention, ...
• Intentions persist. I will not usually give up on my intentions without good reason – they will persist, ...
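The persistence behavior motivated above can be sketched procedurally. The following Python fragment is only an illustration of the intuition, not the paper's formal semantics or its logic programming axiomatization; the action names, the one-unit action duration, and the executability conditions are invented for the flight example.

```python
# Sketch of the "intentions persist" intuition: each intended action is
# executed at the earliest time point, no earlier than its predecessor's
# completion, at which its executability condition holds.

def execute_intended(sequence, start, executable, horizon=20):
    """Return a list of (action, time) pairs scheduling the intended
    sequence, or None if some action never becomes executable before
    the horizon. Actions are assumed to take one unit of time."""
    schedule = []
    t = start
    for action in sequence:
        # Persist: wait until the action becomes executable.
        while t < horizon and not executable(action, t):
            t += 1
        if t >= horizon:
            return None  # the intention could never be carried out
        schedule.append((action, t))
        t += 1
    return schedule

# John intends to fly from A to B and then from B to C, starting at time 0.
# Hypothetically, the B-to-C leg is only executable at flight times 3 and 7.
def executable(action, t):
    if action == "fly_A_B":
        return t == 0
    if action == "fly_B_C":
        return t in (3, 7)
    return False

print(execute_intended(["fly_A_B", "fly_B_C"], 0, executable))
# fly_A_B happens at 0; fly_B_C persists until the next flight, at time 3.
```

Under this reading, a missed connection does not make the domain inconsistent and does not abandon the remaining intentions; the unexecuted action simply waits for its next opportunity.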